1  Introduction

This section introduces the general concepts of continuum mechanics, which provide the foundation for process-based modeling and simulation in engineering disciplines and the applied sciences. The first section discusses how frequently used terms, such as model, process, and simulation, are used in this lecture. The second section discusses computational process model development on a very general level, with a special focus on the mathematical model at its core. Model development often follows deductive reasoning and aims at achieving certain model objectives in a requirement-driven and goal-oriented manner. This seems to contrast with the typical structure of a hypothesis-driven scientific study, as often found in the applied sciences, so we close by discussing the role of research hypotheses in a classical computational model development cycle.

1.1 Model - Process - Simulation

The term model is not uniquely defined. Every scientific discipline knows its own models, and these can be very different in nature. A model might refer to a miniaturized construction, a computer-aided design, to a set of partial differential equations, or to an implemented software tool. The spectrum of scientists describing themselves as ‘being a modeler’ is accordingly broad. In interdisciplinary projects one often experiences that different self-conceptions and understandings of a ‘model’, and of what is meant by ‘modeling’ or ‘being a modeler’, can cause confusion. In order to avoid any ambiguities, the terminology needs to be clarified:

1.1.1 Model

We will refer to a model as a purposeful simplification and abstraction of a perception of reality, and elaborate on this definition from left to right:

Purposeful refers to the fact that a model is goal-oriented or task-driven and needs to suit a set of requirements in the context of the engineering task or the scientific study. The purpose of a model could, for instance, be to study the efficiency of melting-solidification cycles in a latent heat storage system, or to determine the noise emissions of a wind turbine of a certain engineering design. The purpose could also be to quantify glacial melting rates in response to rising temperatures. Effective model development requires a clear and concise definition of the model’s purpose. Real-world computational engineering projects often aim at multidisciplinary design optimization or life-cycle management, which imply challenging multi-stakeholder situations that require multi-purpose models.

Simplification refers to the fact that a model aims at reducing complexity with respect to its real-world counterpart. It is the simplification that enables a systematic investigation, e.g. the continuum assumption that captures the behaviour of a many-particle system in a few differential equations. Simplifying assumptions are what make continuum mechanical models numerically solvable in the first place. Complexity reduction, however, comes at a price. A model that is built upon a collection of simplifying assumptions has a certain scope and must not be used beyond it. An incompressible flow model, for instance, will not be valid when applied to a physical regime characterized by significant density variations. This is intriguing, as the model can still be applied and its results might still look appealing. Yet, these results are no longer physical when the model is applied outside its scope of validity. Though this sounds intuitive, it is sometimes not trivial to check whether the model is admissible for a certain physical situation or not.
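
As a sketch of such an admissibility check (a commonly cited rule of thumb rather than a strict criterion), the incompressibility assumption is usually considered acceptable for flows with a small Mach number,

$$\mathrm{Ma} = \frac{|u|}{c} \lesssim 0.3,$$

where $|u|$ is a characteristic flow speed and $c$ the speed of sound; at larger Mach numbers, density variations can no longer be neglected.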

Abstraction refers to the fact that a model aims at identifying processes and phenomena that are of similar generic structure. This naturally results in one mathematical model being relevant for several, otherwise distinct real-world phenomena. Reversing abstraction means to interpret certain model properties in the context of a specific natural science process or engineering application. As an example, consider traffic flow and hazardous tsunamis. Both can be ‘abstracted’ into a non-linear hyperbolic system of differential equations. This family of mathematical models is known for its tendency to develop shock discontinuities. In the context of traffic flow, these discontinuities can be ‘interpreted’ as a traffic jam, whereas for a tsunami a suitable interpretation would be a breaking wave.
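
To make this shared generic structure explicit, a minimal sketch (in their standard textbook forms) is the Lighthill–Whitham–Richards traffic model and the one-dimensional shallow water equations,

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}\bigl(\rho\, v(\rho)\bigr) = 0, \qquad\qquad \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0, \quad \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\Bigl(hu^2 + \tfrac{1}{2} g h^2\Bigr) = 0,$$

with vehicle density $\rho$ and a prescribed velocity–density relation $v(\rho)$ on the left, and water depth $h$, depth-averaged velocity $u$, and gravitational acceleration $g$ on the right. Both are non-linear hyperbolic conservation laws and can develop discontinuities from smooth initial data.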

Perception of reality refers to the fact that when developing a model, we actually tend to design it towards our own version of the truth, hence our perception of reality. This raises the very relevant question: What is reality, and how do we prove the correctness of a model? The first part of this question is somewhat philosophical. During this lecture, we will assume that whenever we talk about a model, the underlying perception of reality corresponds to the real truth. The second part of this question is briefly touched upon when discussing model validation.

1.1.2 Process

Quite naturally, process-based modeling aims at developing and utilizing models to describe and quantify real-world processes in the applied sciences and engineering applications. But what is a process? As with ‘model’, the word ‘process’ is used in various contexts. These uses have in common that a process always refers to some kind of evolution, either sequential, e.g. the steps of an algorithm, or continuous, e.g. diurnal temperature oscillations.

Modeling any kind of evolution requires our ability to describe the momentary situation, referred to as the state or configuration of a system, e.g. the system’s current velocity or temperature field.

Translating our physical knowledge of balance laws allows us to cast a process, understood as an evolving state, into mathematical formalism. This is referred to as mathematical modeling or first-principle modeling and results in a mathematical model. A mathematical model of a continuum mechanical process is often reflected by a partial differential equation.
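
A minimal example of this translation (assuming heat conduction in a rigid body) is the balance of internal energy combined with Fourier’s law, which yields the heat equation for the temperature state $T(\mathbf{x}, t)$,

$$\rho c \, \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right),$$

with density $\rho$, specific heat capacity $c$, and thermal conductivity $k$.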

Note

Deriving a mathematical model requires us to idealize, hence to simplify and abstract, both the process and the state. Idealization will introduce model errors with respect to reality. The major challenge in mathematical modeling is to manage model errors considering the overall purpose and goals of the modeling task.

1.1.3 Simulation

A large number of continuum mechanical, mathematical process models exist that are of great relevance to many different applied sciences and engineering situations, see for instance (Fowler 1997). Many real-world applications, however, imply a setting that is too complex to allow for a direct analytical solution of the model. Then, the continuum mechanical, mathematical model is just the starting point for yet another scientific journey that aims at developing and implementing a numerical solution strategy for it. The goal is then to find a numerical method that enables us to determine an approximation of the state and its evolution, consistent with the constraints prescribed by the mathematical model. The mathematical model in conjunction with the chosen numerical solution strategy constitutes a computational model. A simulation, finally, refers to using the computational model in order to realize one or many specific scenarios or test cases. Of course, such a scenario can itself be a multi-parameter run or a Monte Carlo simulation.

1.2 Computational model development

Now that we know what mathematical and computational models are, the natural next question is: How do we get these models? A specific model that is used for advanced simulation studies is often developed in a cyclic manner. Three key steps are typically shared by any model development effort, namely:

  1. mathematical modeling (green),
  2. discretization or numerical scheme development (blue), and
  3. verification and validation (red).

Figure 1.1: A generic computational model development cycle consisting of mathematical model formulation (green), development of a numerical scheme (blue), and verification as well as validation for quality assurance and to assess the model’s predictive power (red).

1.2.1 Mathematical modeling

Following the principle of Occam’s razor, we are interested in the simplest possible mathematical model capable of capturing the purpose of our study, hence the physical process we are interested in, say the rate of flow through porous media. This requires us to find a good compromise between the model’s complexity and its feasibility. A certain level of complexity is clearly needed to study the specific aspect that we are interested in, compare Section 1.1.1. Idealization and simplicity, on the other hand, turn out to be beneficial, e.g. when it comes to model calibration. Simpler models, for instance, tend to be less cumbersome to solve numerically and will generally have a lower number of model parameters. Typical steps when formulating a mathematical model are:

Choice of the right scale: We need to decide on an appropriate scale, hence the level of detail at which we want to model the process. This requires knowledge of the relevant spatial and temporal scales and the material(s) involved. Often, a dimensional analysis helps to identify dominant processes (see the sketch after these steps).

Exploitation of physical principles: Physical principles allow us to formulate balance laws, e.g. conservation of mass or momentum, in terms of differential equations that constrain the spatio-temporal evolution of the state.

Model closure and well-posedness: We have to make sure that the mathematical model provides enough information to be solved, and exhibits a meaningful solution. This is referred to as ‘well-posedness’, comprising existence, uniqueness, and stability of the solution. Verifying well-posedness is a non-trivial task, and generations of mathematicians have devoted, and still devote, their work to answering this question. In practice, a prerequisite to well-posedness is often to have ‘just the right amount of information’, hence as many equations as unknowns in the system. Closure relations are used to balance information and unknowns (again, see the sketch after these steps).

Analytical model complexity reduction: Sometimes the balance laws themselves are still too complex or compute-intensive to solve and need to be further simplified, e.g. in order to reduce the number of unknowns. We will refer to this as a generic model complexity reduction, examples being homogenization or depth-integration.
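
Two of these steps can be illustrated with standard textbook examples (a sketch, not tied to a specific application): a dimensional analysis of viscous flow identifies the Reynolds number

$$\mathrm{Re} = \frac{\rho U L}{\mu},$$

with characteristic velocity $U$, length $L$, density $\rho$, and dynamic viscosity $\mu$, as the parameter deciding whether inertial or viscous effects dominate; and the compressible Euler equations, which balance mass, momentum, and energy but contain more unknowns than equations, are commonly closed with an equation of state such as the ideal gas law $p = \rho R T$.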

1.2.2 Discretization and numerical solution

Modeling a realistic scenario often means dealing with a level of complexity that prohibits any analytical solution. Even if the mathematical model were analytically solvable, say because we focus on a process that is captured by a simple diffusion equation such as a heat conduction problem, the geometric setting often implies the need for a computational approach. This requires deciding on an appropriate numerical method, e.g. finite element or finite volume methods, and subsequently on an appropriate software framework. This lecture is not about the numerical solution of partial differential equations; however, we will discuss implications of using specific continuum mechanical models in simulation studies and draw cross-links, as most of the participants will have a solid background in numerical methods for PDEs.
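
To give a concrete, hedged impression of what such a discretization can look like (an explicit finite difference scheme for the 1D heat conduction problem mentioned above, as a minimal sketch rather than a production code):

```python
import numpy as np

# Minimal explicit finite-difference sketch for the 1D heat equation
#   dT/dt = alpha * d^2 T / dx^2  on [0, L] with fixed boundary temperatures.
# All parameter values below are illustrative assumptions.

alpha = 1.0e-4                 # thermal diffusivity [m^2/s] (assumed)
nx, L = 51, 1.0                # number of grid points and domain length [m]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha       # respects the explicit stability limit dt <= dx^2 / (2 alpha)

T = np.full(nx, 20.0)          # initial temperature state [deg C]
T[0], T[-1] = 100.0, 20.0      # Dirichlet boundary conditions

for _ in range(2000):          # march the discrete state forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 100.0, 20.0  # re-impose the boundary values

print(T[::10])                 # coarse view of the approximate temperature field
```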

1.2.3 Verification and validation

Once a computational model is developed and implemented, it has to be checked for formal correctness (verification), as well as for physical consistency (validation).

Verification, convergence and other formal correctness measures: A verified computational model approximates the true solution to the underlying mathematical model (assuming this exists) within a well-defined error bound, the so-called approximation error. Verification quality can be assessed by comparing the computed solution against an analytical solution to the mathematical model. Analytical solutions can either have a physical significance, or they might be artificial, so-called manufactured solutions, derived solely for the purpose of verification and code testing.
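
As a sketch of the manufactured-solution idea (with an arbitrarily chosen exact field, purely for illustration): for the 1D heat equation $\partial_t T = \alpha\, \partial_{xx} T + f$, choosing the manufactured solution $T_m(x,t) = \sin(\pi x)\, e^{-t}$ and inserting it into the equation yields the required source term

$$f(x,t) = \partial_t T_m - \alpha\, \partial_{xx} T_m = \left( \alpha \pi^2 - 1 \right) \sin(\pi x)\, e^{-t}.$$

Running the code with this source term and matching initial and boundary data, the numerical solution can be compared against $T_m$ to quantify the approximation error.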

A grid convergence study constitutes a specific verification measure that is often used in practice. It aims at investigating how the approximation error decreases as we increase the resolution of the computational grid. If, for instance, a numerical scheme for a free-surface flow model is implemented based on a Godunov-type finite volume scheme with linear reconstruction and a second-order Runge-Kutta time integration, the convergence study should result in a quadratically decreasing error. Any deviation from this indicates a likely bug in the code. In complex scenarios, for which no analytical solution is available, we can still conduct an empirical grid convergence study. In that case, we compare against a simulation result acquired on a very fine spatial grid.
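
A minimal sketch of how the observed convergence order is typically extracted from such a study (the error values below are placeholders, not actual results):

```python
import numpy as np

# Errors measured against a reference solution on successively refined grids,
# each refinement halving the grid spacing h. All values are illustrative.
h      = np.array([0.1, 0.05, 0.025, 0.0125])
errors = np.array([4.0e-3, 1.0e-3, 2.5e-4, 6.3e-5])   # e.g. L2 errors (placeholders)

# Observed order between consecutive refinement levels:
#   p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
p = np.log(errors[:-1] / errors[1:]) / np.log(h[:-1] / h[1:])
print("observed convergence orders:", p)   # should approach 2 for a second-order scheme
```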

In addition to these rather standardized grid convergence studies, there are also case- and scheme-dependent correctness measures that test for known properties and structure in the mathematical model, for instance mass and momentum conservation, positivity preservation, or a correct prediction for a free-surface fluid at rest. Checking for these properties can be done after each time step or at the final time, and constitutes a further correctness measure for the code.
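
A hedged sketch of such a structural check, here discrete mass conservation for a finite volume state (function name, arguments, and tolerance are illustrative):

```python
import numpy as np

def check_mass_conservation(h, dx, reference_mass, rel_tol=1e-12):
    """Compare the current total discrete mass against a reference value.

    h              -- array of cell-averaged depths (or densities)
    dx             -- uniform cell width
    reference_mass -- total mass at the initial time
    """
    total_mass = np.sum(h) * dx
    drift = abs(total_mass - reference_mass) / abs(reference_mass)
    return drift <= rel_tol, drift

# Usage after each time step (illustrative):
# ok, drift = check_mass_conservation(h, dx, initial_mass)
# if not ok:
#     print(f"mass conservation violated, relative drift = {drift:.3e}")
```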

Validation: A key aspect of any kind of computational model development is a rigorous assessment of the model’s validity. While a ‘verified’ computational model is consistent with its underlying mathematical model formulation (within a certain and well-understood error range), this does not guarantee that we are capturing the right physics! The remaining question hence is whether the mathematical model correctly reflects the process we are interested in. This is done by validating the model. An intuitive approach to model validation is to compare against high-confidence data, either from lab experiments or field observations. Regardless of whether we compare against lab or field data, such data typically differs from the simulation output in spatio-temporal coverage, scope (the type of observable might differ from the states we are predicting), and quality (sensor accuracy), which makes validation results challenging to interpret. In these cases it sometimes makes sense to validate against synthesized data sets, hence against artificial data generated from computational models at higher fidelity levels.
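
A minimal, hedged sketch of such a quantitative comparison (interpolating a simulated time series to the observation times and computing a root-mean-square deviation; all names are illustrative):

```python
import numpy as np

def rmse_against_observations(t_sim, u_sim, t_obs, u_obs):
    """Root-mean-square deviation between a simulated time series and observations.

    The simulated state is linearly interpolated to the observation times, which
    already glosses over differences in coverage, scope, and sensor accuracy.
    """
    u_sim_at_obs = np.interp(t_obs, t_sim, u_sim)
    return np.sqrt(np.mean((u_sim_at_obs - u_obs) ** 2))
```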

A remark on model calibration:

Finally, we need to distinguish model validation from model calibration in order to avoid any source of confusion: Many (not all) computational models for applied sciences and engineering applications exhibit empirical model parameters that cannot be determined a priori, but need to be calibrated. Depending on the community, this step is also called parameter identification. In the case of geohazards modeling, for instance, it would be the friction parameters that need to be calibrated. In the case of gear box modeling, it could be parameters associated with erosion rates within the bearing. Model calibration hence constitutes an optimization problem that aims at finding parameter values that minimize the deviation between simulation results and data (while model validation aims at quantifying the deviation between the simulated results of the calibrated model and data). Calibration is typically conducted on a training data set, and followed by a subsequent investigation of the model’s prognostic performance on an (independent) validation data set.
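
A hedged, toy sketch of this calibration-then-validation workflow (the model, data, and parameter bounds are purely illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model(t, k, u0=10.0):
    # Deliberately simple stand-in model with a single friction-like parameter k
    return u0 * np.exp(-k * t)

# Synthetic training and (independent) validation data, for illustration only
t_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
u_train = np.array([10.0, 7.5, 5.6, 4.2, 3.1])
t_valid = np.array([5.0, 6.0])
u_valid = np.array([2.4, 1.8])

def misfit(k):
    # Calibration objective: squared deviation between model output and training data
    return np.sum((model(t_train, k) - u_train) ** 2)

result = minimize_scalar(misfit, bounds=(0.0, 5.0), method="bounded")
k_cal = result.x

# Validation step: quantify the deviation of the *calibrated* model on unseen data
validation_rmse = np.sqrt(np.mean((model(t_valid, k_cal) - u_valid) ** 2))
print(f"calibrated k = {k_cal:.3f}, validation RMSE = {validation_rmse:.3f}")
```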

1.3 Modeling objective versus research hypothesis

This final section is devoted to the observation that computational model development studies and articles are often presented in a deductive manner, whereas other science publications are often hypothesis-driven. The provocative question is: Are there no research hypotheses when developing computational models?

The answer, of course, is that there are. Oftentimes, however, the hypotheses are not explicitly mentioned, but implicitly hidden in the model’s underlying assumptions. The implicit statement then is: If we rely on the presented collection of simplifying assumptions, and choose the presented numerical solution strategy, we obtain a computational model capable of simulating the process we are interested in at the required quality. This allows for an interesting new perspective on computational models in general:

Rather than thinking of a computational model as a set of equations, or as a powerful code, we can also think of it as a collection of design decisions taken during its development. It could be the decision to apply an analytical spatial complexity reduction technique, or to use a specific numerical scheme, or even to rely on a certain data set for the parameter identification. Each of these decisions is associated with a certain limitation of the final computational model’s predictive scope. If we rely on a homogenized mathematical model, we cannot spatially resolve our process below a certain scale. The arising uncertainties need to be systematically investigated and managed, either separately or collectively, regarding their impact on the predictive quality of the computational model.